
Model Generator

The Model Generator dialog provides options — architectures, model types, input dimensions, and others — for generating new deep learning models for denoising, semantic segmentation, and super-resolution.

Click the New button on the Model Overview panel to open the Model Generator dialog, shown below.

Model Generator dialog

Model Generator options

 


Show architectures for

Lets you filter the available architectures to those recommended for segmentation, super-resolution, and denoising.

Semantic Segmentation… Filters the Architecture list to models best suited for semantic segmentation, which is the process of associating each pixel of an image with a class label, such as a material phase or anatomical feature.

Super-resolution… Filters the Architecture list to models best suited for super-resolution.

Denoising… Filters the Architecture list to models best suited for denoising.

Architecture

Lists the default models supplied with the Deep Learning Tool. Architectures can be filtered by type.

Architecture description

Provides a short description of the selected architecture and a link for further information (see also Deep Learning Architectures).

Model type

Lets you choose the type of deep learning model — Regression or Semantic segmentation — that you need to generate.

Regression… Regression model types are suitable for super-resolution and denoising.

Semantic segmentation… Semantic segmentation model types are suitable for binary and multi-class segmentation tasks.

Class Count

Available only for semantic segmentation model types, this parameter lets you enter the number of classes required. The minimum number of classes is '2', for a binary segmentation task, while the maximum is '20' for multi-class segmentations.
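The limits described above can be sketched as a simple range check. This is a hypothetical illustration of the constraint, not part of the Deep Learning Tool itself:

```python
# Hypothetical check mirroring the class count limits described above
# (2 for binary segmentation, up to 20 for multi-class segmentation).
def is_valid_class_count(n: int) -> bool:
    return 2 <= n <= 20

print(is_valid_class_count(2))   # True: a binary segmentation task
print(is_valid_class_count(21))  # False: above the supported maximum
```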

Input count

Lets you include multiple inputs for training. For example, when you are working with data from simultaneous image acquisition systems you might want to select each modality as an input.
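As a hedged illustration of what multiple inputs mean in practice, the sketch below stacks two co-registered modalities of the same slice so they are presented to the model together. The array names and the channel-stacking approach are assumptions for illustration, not Dragonfly internals:

```python
import numpy as np

# Two co-registered modalities of the same 256 x 256 slice,
# e.g. data from simultaneous acquisition systems.
modality_a = np.random.rand(256, 256)
modality_b = np.random.rand(256, 256)

# With an input count of 2, both modalities form one multi-channel input.
multi_input = np.stack([modality_a, modality_b], axis=-1)
print(multi_input.shape)  # (256, 256, 2)
```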

Input Dimension

Lets you choose to train slice-by-slice (2D) or to allow training over multiple slices (3D).

  • Choose '2D' if you want to limit training to 2D, i.e. slice-by-slice.
  • Choose '3D' and then a number equal to or greater than '3' to train in 3D, in which case multiple slices of the input dataset will be considered for the output target (see Configuring Multi-Slice Inputs).
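The difference between the two choices above can be sketched in terms of input shapes. This is a hypothetical illustration, assuming a 3D input dimension of '5' means each training sample is a window of five neighbouring slices centred on the target slice; the function name is not part of the Deep Learning Tool:

```python
import numpy as np

volume = np.random.rand(100, 256, 256)  # (slices, height, width)

def slice_window(volume, index, depth):
    """Return `depth` neighbouring slices centred on slice `index`."""
    half = depth // 2
    return volume[index - half : index + half + 1]

# 2D: each sample is a single slice.
sample_2d = volume[50]                    # shape (256, 256)

# 3D with input dimension 5: each sample spans multiple slices.
sample_3d = slice_window(volume, 50, 5)   # shape (5, 256, 256)
```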

Note Not all data is suitable for training in 3D, for example in cases in which features are not consistent over multiple slices.

Name

Lets you enter a name for the generated model.

Note Names are automatically formatted as: SelectedArchitecture_EnteredName.
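The naming convention above can be sketched as follows. The function name is illustrative only and not part of the Deep Learning Tool:

```python
# Hypothetical sketch of the automatic name formatting described above:
# SelectedArchitecture_EnteredName.
def format_model_name(architecture: str, entered_name: str) -> str:
    return f"{architecture}_{entered_name}"

print(format_model_name("UNet", "MyFirstModel"))  # UNet_MyFirstModel
```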

Description

Lets you enter a description of your model.

Parameters

Lists the hyperparameters associated with the selected architecture and the default values for each.

Note Refer to the documents referenced in the Architecture Description box for information about the implemented architectures and their associated parameters.

 
